
    Sensitivity optimization in quantum parameter estimation

    We present a general framework for sensitivity optimization in quantum parameter estimation schemes based on continuous (indirect) observation of a dynamical system. As an illustrative example, we analyze the canonical scenario of monitoring the position of a free mass or harmonic oscillator to detect weak classical forces. We show that our framework allows the consideration of sensitivity scheduling as well as estimation strategies for non-stationary signals, leading us to propose corresponding generalizations of the Standard Quantum Limit for force detection. (Comment: 15 pages, RevTeX)
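    For reference, the Standard Quantum Limit that this abstract generalizes has a familiar textbook form for a free mass m monitored over a time τ; the expressions below are a hedged reference point only (prefactors of order unity vary by convention and are not taken from the paper):

    ```latex
    % Position-measurement SQL for a free mass m probed over time \tau,
    % and the corresponding minimum detectable classical force.
    \Delta x_{\mathrm{SQL}} \sim \sqrt{\frac{\hbar \tau}{m}}, \qquad
    F_{\mathrm{min}} \sim \frac{\sqrt{\hbar m}}{\tau^{3/2}}
    ```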

    Adiabatic elimination in quantum stochastic models

    We consider a physical system with a coupling to bosonic reservoirs via a quantum stochastic differential equation. We study the limit of this model as the coupling strength tends to infinity. We show that in this limit the solution to the quantum stochastic differential equation converges strongly to the solution of a limit quantum stochastic differential equation. In the limiting dynamics the excited states are removed and the ground states couple directly to the reservoirs. (Comment: 17 pages, no figures, corrected mistake)

    Global Properties of Solar Flares


    Image Registration and Fusion for Interventional MRI Guided Thermal Ablation of the Prostate Cancer

    We are investigating interventional MRI (iMRI) guided thermal ablation treatment of prostate cancer. Functional images such as SPECT can detect and localize tumors in the prostate that are not reliably seen in MRI. We intend to combine the advantages of SPECT with iMRI-guided treatments. Our concept is first to register the low-resolution SPECT with a high-resolution MRI volume. Then, by registering the high-resolution MR image with iMRI acquisitions, we can in turn map the functional data and high-resolution anatomic information to iMRI images for improved tumor targeting. For the first step, we used a mutual information registration method. For the latter, we developed a robust slice-to-volume (SV) registration algorithm. Image data were acquired from patients and volunteers. Compared to our volume-to-volume registration, which was previously evaluated to be quite accurate, the SV registration accuracy is about 0.5 mm for transverse images covering the prostate. With our image registration and fusion software, simulation experiments show that it is feasible to incorporate SPECT and high-resolution MRI into iMRI-guided treatment.
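    The mutual-information registration step described above can be sketched numerically. The histogram-based estimator below is a minimal illustration, not the paper's implementation; the function name, bin count, and test images are ours:

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Mutual information between two equally-shaped images,
        estimated from their joint intensity histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()            # joint probability
        px = pxy.sum(axis=1, keepdims=True)  # marginal of image a
        py = pxy.sum(axis=0, keepdims=True)  # marginal of image b
        nz = pxy > 0                         # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    # An image shares maximal information with itself,
    # and much less with unrelated noise.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    mi_self = mutual_information(img, img)
    mi_noise = mutual_information(img, rng.random((64, 64)))
    ```

    A registration routine would maximize this quantity over rigid (or affine) transform parameters of one image, which is why it copes with the differing intensity characteristics of SPECT and MRI.
    
    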

    Is the Sun a Magnet?

    It has been argued (Gough and McIntyre in Nature 394, 755, 1998) that the only way for the radiative interior of the Sun to be rotating uniformly in the face of the differentially rotating convection zone is for it to be pervaded by a large-scale magnetic field, a field which is responsible also for the thinness of the tachocline. It is most likely that this field is the predominantly dipolar residual component of a tangled primordial field that was present in the interstellar medium from which the Sun condensed (Braithwaite and Spruit in Nature 431, 819, 2004), and that advection by the meridional flow in the tachocline has caused the dipole axis to be inclined from the axis of rotation by about 60° (Gough in Geophys. Astrophys. Fluid Dyn. 106, 429, 2012). It is suggested here that, notwithstanding its turbulent passage through the convection zone, a vestige of that field is transmitted by the solar wind to Earth, where it modulates the geomagnetic field in a periodic way. The field variation reflects the inner rotation of the Sun, and, unlike turbulent-dynamo-generated fields, must maintain phase. I report here a new look at an earlier analysis of the geomagnetic field by Svalgaard and Wilcox (Solar Phys. 41, 461, 1975), which reveals evidence for appropriate phase coherence, thereby adding support to the tachocline theory.

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and their regularization form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
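    The forward-backward proximal splitting scheme mentioned in (iii) can be illustrated on the best-known low-complexity prior, sparsity. The sketch below is the classical ISTA iteration for the ℓ1-regularized least-squares problem; the problem sizes, regularization weight, and names are chosen for illustration and are not from the chapter:

    ```python
    import numpy as np

    def ista(A, y, lam, n_iter=500):
        """Forward-backward splitting (ISTA) for
        min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - y)        # forward (explicit gradient) step
            z = x - g / L
            # backward (proximal) step: soft-thresholding is prox of lam*||.||_1
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x

    # Recover a 3-sparse vector from noiseless random measurements.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100)) / np.sqrt(40)
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
    x_hat = ista(A, A @ x_true, lam=0.01)
    ```

    Swapping the soft-thresholding line for the proximal operator of another partly smooth regularizer (group soft-thresholding, singular-value thresholding, ...) yields the corresponding scheme for that prior.
    
    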

    Theoretical Evaluation of the Detectability of Random Lesions in Bayesian Emission Reconstruction

    Detecting cancerous lesions is an important task in positron emission tomography (PET). Bayesian methods based on the maximum a posteriori principle (also called penalized maximum likelihood methods) have been developed to deal with the low signal-to-noise ratio in the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the prior parameters in Bayesian reconstruction control the resolution-noise trade-off and hence affect the detectability of lesions in reconstructed images. Bayesian reconstructions are difficult to analyze because their resolution and noise properties are nonlinear and object-dependent. Most research has been based on Monte Carlo simulations, which are very time consuming. Building on recent progress in the theoretical analysis of the image properties of statistical reconstructions and the development of numerical observers, we develop here a theoretical approach for fast computation of lesion detectability in Bayesian reconstruction. The results can be used to choose the optimum hyperparameter for maximum lesion detectability. New in this work is the use of theoretical expressions that explicitly model the statistical variation of the lesion and background without assuming that the object variation is (locally) stationary. The theoretical results are validated using Monte Carlo simulations, and the comparisons show good agreement between the theoretical predictions and the Monte Carlo results.
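    The resolution-noise trade-off controlled by the prior hyperparameter can be illustrated with a toy penalized (MAP-style) reconstruction under a quadratic smoothness prior. This is a deliberately simplified sketch, not the paper's Poisson-likelihood PET model; the operator sizes, noise level, and names are ours:

    ```python
    import numpy as np

    def map_reconstruct(A, y, beta):
        """Penalized least-squares (quadratic-prior MAP) reconstruction:
        argmin_x ||Ax - y||^2 + beta * ||Dx||^2,
        where D takes first differences of neighbouring pixels."""
        n = A.shape[1]
        D = (np.eye(n) - np.eye(n, k=1))[:-1]   # first-difference operator
        return np.linalg.solve(A.T @ A + beta * (D.T @ D), A.T @ y)

    # A larger hyperparameter yields a smoother (lower-noise but
    # lower-resolution) image -- the trade-off described above.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((80, 50))
    x_true = np.zeros(50)
    x_true[20:30] = 1.0                          # a "lesion" on a flat background
    y = A @ x_true + 0.5 * rng.standard_normal(80)
    x_lo = map_reconstruct(A, y, beta=0.1)       # sharper, noisier
    x_hi = map_reconstruct(A, y, beta=100.0)     # smoother, lower resolution
    ```

    Sweeping beta and scoring each reconstruction with a numerical observer is the kind of hyperparameter selection the theoretical analysis above aims to accelerate.
    
    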